robust proper learning
Nearly Tight Bounds for Robust Proper Learning of Halfspaces with a Margin
Ilias Diakonikolas, Daniel Kane, Pasin Manurangsi
We study the problem of {\em properly} learning large margin halfspaces in the agnostic PAC model. In more detail, we study the complexity of properly learning $d$-dimensional halfspaces on the unit ball within misclassification error $\alpha \cdot \opt_{\gamma} + \eps$, where $\opt_{\gamma}$ is the optimal $\gamma$-margin error rate and $\alpha \geq 1$ is the approximation ratio. We give learning algorithms and computational hardness results for this problem, for all values of the approximation ratio $\alpha \geq 1$, that are nearly matching for a range of parameters. Specifically, for the natural setting where $\alpha$ is any constant greater than one, we provide an essentially tight complexity characterization. On the positive side, we give an $\alpha = 1.01$-approximate proper learner that uses $O(1/(\eps^2\gamma^2))$ samples (which is optimal) and runs in time $\poly(d/\eps) \cdot 2^{\tilde{O}(1/\gamma^2)}$.
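For reference, the $\gamma$-margin error rate used above can be formalized as follows (a minimal sketch, assuming the standard definition; the distribution $D$ over labeled examples is left implicit in the abstract):
$$ \opt_{\gamma} \;=\; \inf_{\|w\|_2 = 1} \; \Pr_{(x,y) \sim D}\big[\, y \, \langle w, x \rangle < \gamma \,\big], $$
where $x$ lies on the unit ball, $y \in \{\pm 1\}$, and a halfspace $w$ incurs a $\gamma$-margin error on $(x,y)$ whenever it misclassifies the point or classifies it correctly with margin below $\gamma$. Under this convention, the benchmark $\alpha \cdot \opt_{\gamma} + \eps$ compares the learner's misclassification error against the best achievable $\gamma$-margin error.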